126 research outputs found

    How accurate models of human behavior are needed for human-robot interaction? For automated driving?


    Task-Specific Sensor Planning for Robotic Assembly Tasks

    When performing multi-robot tasks, sensory feedback is crucial for reducing uncertainty and ensuring correct execution. Yet the use of sensors should be planned as an integral part of task planning, taking into account several factors such as the tolerances of different inferred properties of the scene and interaction with different agents. In this paper we handle this complex problem in a principled, yet efficient way. We use surrogate predictors based on open-loop simulation to estimate and bound the probability of success for specific tasks. We reason about such task-specific uncertainty approximants and their effectiveness. We show how they can be incorporated into a multi-robot planner, and demonstrate results with a team of robots performing assembly tasks.
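
    The surrogate idea above can be sketched as a Monte Carlo estimator over open-loop simulations, with a concentration bound on the estimate. This is a minimal illustration, not the paper's method; the function names and the toy task are assumptions.

```python
import math
import random

def estimate_success_probability(simulate_once, n_trials=2000, delta=0.05, seed=0):
    """Monte Carlo surrogate predictor: estimate the probability that an
    open-loop simulated task execution succeeds, plus a Hoeffding half-width."""
    rng = random.Random(seed)
    successes = sum(1 for _ in range(n_trials) if simulate_once(rng))
    p_hat = successes / n_trials
    # With probability >= 1 - delta, the true p lies within +/- eps of p_hat.
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n_trials))
    return p_hat, eps

# Hypothetical toy task: success when the sensed placement error is in tolerance.
def toy_trial(rng):
    return abs(rng.gauss(0.0, 1.0)) < 1.5
```

    Tightening the bound (smaller eps) costs quadratically many trials, which is why cheap open-loop simulation matters here.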

    Manipulation Planning Using Environmental Contacts to Keep Objects Stable under External Forces

    This paper addresses the problem of sequential manipulation planning to keep an object stable under changing external forces. In particular, we focus on using object-environment contacts. We present a planning algorithm that can generate robot configurations and motions to intelligently use object-environment, as well as object-robot, contacts to keep an object stable under forceful operations such as drilling and cutting. Given a sequence of external forces, the planner minimizes the number of different configurations used to keep the object stable. An important computational bottleneck in this algorithm is the static stability analysis of a large number of configurations. We propose a containment relationship between configurations to prune the stability checking process.
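
    The pruning idea can be sketched as follows: skip the expensive stability check for a configuration whose contact set contains that of an already-verified stable configuration. This is an illustrative paraphrase under that assumption; the interface names are invented.

```python
def check_stability_with_pruning(configs, contacts_of, is_stable):
    """Run static-stability checks, pruning via a containment relation:
    a config whose contacts are a superset of a verified-stable config's
    contacts is assumed stable without a fresh check."""
    stable, verified_sets, checks = [], [], 0
    for cfg in configs:
        cs = contacts_of(cfg)
        if any(v <= cs for v in verified_sets):  # containment: inherit stability
            stable.append(cfg)
            continue
        checks += 1                              # expensive analysis happens here
        if is_stable(cfg):
            stable.append(cfg)
            verified_sets.append(cs)
    return stable, checks
```

    The returned check count makes the savings visible: configurations covered by containment never reach the analysis step.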

    Learning Physics-Based Manipulation in Clutter: Combining Image-Based Generalization and Look-Ahead Planning

    Physics-based manipulation in clutter involves complex interaction between multiple objects. In this paper, we consider the problem of learning, from interaction in a physics simulator, manipulation skills to solve this multi-step sequential decision making problem in the real world. Our approach has two key properties: (i) the ability to generalize and transfer manipulation skills (over the type, shape, and number of objects in the scene) using an abstract image-based representation that enables a neural network to learn useful features; and (ii) the ability to perform look-ahead planning in the image space using a physics simulator, which is essential for such multi-step problems. We show, in sets of simulated and real-world experiments (video available at https://youtu.be/EmkUQfyvwkY), that by learning to evaluate actions in an abstract image-based representation of the real world, the robot can generalize and adapt to the object shapes in challenging real-world environments.
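
    The look-ahead component can be sketched as a depth-limited search that rolls each action through a simulator and scores leaves with a learned evaluator. A minimal sketch with invented interface names, not the paper's architecture.

```python
def lookahead(state, actions, simulate, evaluate, depth):
    """Depth-limited look-ahead planning: expand each action through the
    simulator, score leaf states with the learned evaluator, and return
    the best action together with its backed-up value."""
    if depth == 0:
        return None, evaluate(state)
    best_action, best_value = None, float("-inf")
    for action in actions:
        _, value = lookahead(simulate(state, action), actions, simulate, evaluate, depth - 1)
        if value > best_value:
            best_action, best_value = action, value
    return best_action, best_value
```

    In practice the evaluator would be the learned network applied to the abstract image representation; here any state-scoring function fits the interface.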

    Physics-Based Object 6D-Pose Estimation during Non-Prehensile Manipulation

    We propose a method to track the 6D pose of an object over time, while the object is under non-prehensile manipulation by a robot. At any given time during the manipulation of the object, we assume access to the robot joint controls and an image from a camera. We use the robot joint controls to perform a physics-based prediction of how the object might be moving. We then combine this prediction with the observation coming from the camera, to estimate the object pose as accurately as possible. We use a particle filtering approach to combine the control information with the visual information. We compare the proposed method with two baselines: (i) using only an image-based pose estimation system at each time-step, and (ii) a particle filter which does not perform the computationally expensive physics predictions, but assumes the object moves with constant velocity. Our results show that making physics-based predictions is worth the computational cost, resulting in more accurate tracking and in object pose estimates even when the object is not clearly visible to the camera.
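
    The fusion step can be sketched as one predict-weight-resample iteration of a particle filter, with a physics-based motion model driven by the controls and a camera likelihood. Function names are illustrative assumptions, not the paper's API.

```python
import random

def particle_filter_step(particles, control, observation, predict, likelihood, rng):
    """One particle filter step: physics-based prediction from robot controls,
    weighting by the camera observation likelihood, then resampling."""
    predicted = [predict(p, control, rng) for p in particles]
    weights = [likelihood(observation, p) for p in predicted]
    total = sum(weights)
    if total == 0.0:                  # degenerate case: keep raw predictions
        return predicted
    weights = [w / total for w in weights]
    return rng.choices(predicted, weights=weights, k=len(predicted))
```

    Swapping the physics-based `predict` for a constant-velocity model gives exactly the paper's second baseline, which is what makes the comparison clean.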

    Human-like Planning for Reaching in Cluttered Environments

    Humans, in comparison to robots, are remarkably adept at reaching for objects in cluttered environments. The best existing robot planners are based on random sampling of the configuration space, which becomes excessively high-dimensional with a large number of objects. Consequently, most planners fail to efficiently find object manipulation plans in such environments. We addressed this problem by identifying high-level manipulation plans in humans, and transferring these skills to robot planners. We used virtual reality to capture human participants reaching for a target object on a tabletop cluttered with obstacles. From this, we devised a qualitative representation of the task space to abstract the decision making, irrespective of the number of obstacles. Based on this representation, human demonstrations were segmented and used to train decision classifiers. Using these classifiers, our planner produced a list of waypoints in task space. These waypoints provided a high-level plan, which could be transferred to an arbitrary robot model and used to initialise a local trajectory optimiser. We evaluated this approach through testing on unseen human VR data, a physics-based robot simulation, and a real robot (dataset and code are publicly available). We found that the human-like planner outperformed a state-of-the-art trajectory optimisation algorithm, and was able to generate effective strategies for rapid planning, irrespective of the number of obstacles in the environment.
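
    The waypoint-generation step can be sketched as rolling a trained decision classifier forward in task space until the target is reached; the resulting list seeds the trajectory optimiser. The classifier interface below is an assumption for illustration.

```python
def waypoints_from_classifier(start, target, classify_step, max_steps=50):
    """Produce a high-level task-space waypoint list by repeatedly querying a
    learned decision classifier for the next qualitative move."""
    pos, waypoints = start, [start]
    for _ in range(max_steps):
        if pos == target:
            break
        step = classify_step(pos, target)   # e.g. a trained decision tree
        pos = (pos[0] + step[0], pos[1] + step[1])
        waypoints.append(pos)
    return waypoints
```

    Because the classifier works on a qualitative abstraction, the waypoint count stays bounded regardless of how many obstacles fill the scene.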

    Occlusion-Robust Autonomous Robotic Manipulation of Human Soft Tissues With 3D Surface Feedback

    Robotic manipulation of 3D soft objects remains challenging in the industrial and medical fields. Various methods based on mechanical modelling, data-driven approaches, or explicit feature tracking have been proposed. A unifying disadvantage of these methods is the high computational cost of simultaneous image processing, identification of mechanical properties, and motion planning, leading to a need for less computationally intensive methods. To address these issues, we propose a method for autonomous robotic manipulation with 3D surface feedback. First, we produce a deformation model of the manipulated object, which estimates the effect of the robots' movements by monitoring the displacement of surface points surrounding the manipulators. Then, we develop a 6-degree-of-freedom velocity controller to manipulate the grasped object into a desired shape. We validate our approach through comparative simulations with existing methods and through experiments on phantom and cadaveric soft tissues with the da Vinci Research Kit. The results demonstrate the robustness of the technique to occlusions and across various materials. Compared to state-of-the-art linear and data-driven methods, our approach is 46.5% and 15.9% more precise and saves 55.2% and 25.7% of manipulation time, respectively.
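
    A velocity controller of this kind can be sketched as damped least squares through an estimated deformation Jacobian: surface-point error in, end-effector velocity out, with a speed cap. This is a generic resolved-rate sketch under assumed names, not the paper's controller.

```python
import numpy as np

def shape_servo_velocity(J, current_pts, desired_pts, gain=1.0, vmax=0.05, damping=1e-3):
    """Map surface-point error through an (estimated) deformation Jacobian J
    to a velocity command, using damped least squares and a speed cap."""
    err = gain * (np.asarray(desired_pts) - np.asarray(current_pts)).ravel()
    # Damped least squares avoids blow-up when J is near-singular.
    JJt = J @ J.T + damping * np.eye(J.shape[0])
    v = J.T @ np.linalg.solve(JJt, err)
    speed = np.linalg.norm(v)
    if speed > vmax:                      # keep commands safe for soft tissue
        v *= vmax / speed
    return v
```

    The damping term trades a small steady-state error for stability near singular configurations, a standard choice in visual servoing.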

    Planning with a Receding Horizon for Manipulation in Clutter using a Learned Value Function

    Manipulation in clutter requires solving complex sequential decision-making problems in an environment rich with physical interactions. The transfer of motion planning solutions from simulation to the real world in open loop suffers from the inherent uncertainty in modelling real-world physics. We propose interleaving planning and execution in real time, in a closed-loop setting, using a Receding Horizon Planner (RHP) for pushing manipulation in clutter. In this context, we address the problem of finding a suitable value-function-based heuristic for efficient planning, and for estimating the cost-to-go from the horizon to the goal. We estimate such a value function first by using plans generated by an existing sampling-based planner. Then, we further optimize the value function through reinforcement learning. We evaluate our approach and compare it to state-of-the-art planning techniques for manipulation in clutter. We conduct experiments in simulation with artificially injected uncertainty in the physics parameters, as well as in real-world tasks of manipulation in clutter. We show that this approach enables the robot to react effectively to the uncertain dynamics of the real world.
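
    One receding-horizon decision can be sketched as a bounded-depth search whose leaves are scored by the learned cost-to-go; only the first action is executed before replanning. The interfaces below are illustrative assumptions.

```python
def receding_horizon_action(state, actions, simulate, step_cost, cost_to_go, horizon):
    """Search action sequences up to the horizon, score leaves with the
    learned cost-to-go estimate, and return the first action of the
    cheapest sequence (which the robot executes before replanning)."""
    def rollout(s, depth):
        if depth == 0:
            return cost_to_go(s)        # learned heuristic closes the horizon
        return min(step_cost(s, a) + rollout(simulate(s, a), depth - 1) for a in actions)
    return min(actions, key=lambda a: step_cost(state, a) + rollout(simulate(state, a), horizon - 1))
```

    Replanning after every executed action is what lets the closed loop absorb the mismatch between simulated and real physics.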

    Multi-Robot Grasp Planning for Sequential Assembly Operations

    This paper addresses the problem of finding robot configurations to grasp assembly parts during a sequence of collaborative assembly operations. We formulate the search for such configurations as a constraint satisfaction problem (CSP). Collision constraints in an operation and transfer constraints between operations determine the sets of feasible robot configurations. We show that solving the connected constraint graph with off-the-shelf CSP algorithms can quickly become infeasible even for a few sequential assembly operations. We present an algorithm which, through the assumption of feasible regrasps, divides the CSP into independent smaller problems that can be solved exponentially faster. The algorithm then uses local search techniques to improve this solution by removing a gradually increasing number of regrasps from the plan. The algorithm enables the user to stop the planner anytime and use the current best plan if the cost of removing regrasps from the plan exceeds the cost of executing those regrasps. We present simulation experiments comparing our algorithm's performance to that of a naive algorithm which directly solves the connected constraint graph. We also present a real robot system which uses the output of our planner to grasp and bring parts together in assembly configurations.
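
    The decomposition can be sketched as: solve each operation's grasp choice independently (regrasps assumed feasible between operations), then locally merge neighbouring operations that admit a common configuration to remove regrasps. An illustrative toy, not the paper's full CSP formulation.

```python
def plan_grasps(operations, feasible_configs):
    """Per-operation CSP solve followed by one local-search pass that
    removes a regrasp wherever adjacent operations share a feasible config."""
    # Independent subproblems: pick any feasible config per operation.
    plan = [min(feasible_configs(op)) for op in operations]
    # Local improvement: unify adjacent operations when possible.
    for i in range(len(plan) - 1):
        common = feasible_configs(operations[i]) & feasible_configs(operations[i + 1])
        if common and plan[i] != plan[i + 1]:
            plan[i] = plan[i + 1] = min(common)
    return plan
```

    Because the initial per-operation solve is always valid, the improvement pass can be interrupted at any point and still yield an executable plan, matching the anytime behaviour described above.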

    Parareal with a Learned Coarse Model for Robotic Manipulation

    A key component of many robotics model-based planning and control algorithms is physics prediction, that is, forecasting a sequence of states given an initial state and a sequence of controls. This process is slow and a major computational bottleneck for robotics planning algorithms. Parallel-in-time integration methods can help to leverage parallel computing to accelerate physics predictions and thus planning. The Parareal algorithm iterates between a coarse serial integrator and a fine parallel integrator. A key challenge is to devise a coarse model that is computationally cheap but accurate enough for Parareal to converge quickly. Here, we investigate the use of a deep neural network physics model as a coarse model for Parareal in the context of robotic manipulation. In simulated experiments using the physics engine MuJoCo as the fine propagator, we show that the learned coarse model leads to faster Parareal convergence than a coarse physics-based model. We further show that the learned coarse model makes it possible to apply Parareal to scenarios with multiple objects, where the physics-based coarse model is not applicable. Finally, we conduct experiments on a real robot and show that Parareal predictions are close to real-world physics predictions for robotic pushing of multiple objects. Code (https://doi.org/10.5281/zenodo.3779085) and videos (https://youtu.be/wCh2o1rf-gA) are publicly available.
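
    The Parareal iteration itself is compact enough to sketch directly: a serial coarse sweep provides the trajectory, and fine propagations (parallelisable across time slices) supply the correction. A generic textbook sketch, with the coarse propagator standing in for the learned model.

```python
def parareal(x0, coarse, fine, n_steps, n_iters):
    """Parareal: iterate a serial coarse sweep corrected by fine
    propagations that can run in parallel across the time slices."""
    X = [x0]
    for _ in range(n_steps):                       # initial serial coarse sweep
        X.append(coarse(X[-1]))
    for _ in range(n_iters):
        F = [fine(X[i]) for i in range(n_steps)]   # parallelisable in practice
        X_new = [x0]
        for i in range(n_steps):
            # Predictor-corrector update: new coarse + (fine - old coarse).
            X_new.append(coarse(X_new[-1]) + F[i] - coarse(X[i]))
        X = X_new
    return X
```

    After k iterations the first k time slices match the serial fine solution exactly, so the algorithm converges in at most n_steps iterations; the point of a good (e.g. learned) coarse model is to make it converge in far fewer.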